
    LIPIcs

    Graph games provide the foundation for modeling and synthesis of reactive processes. Such games are played over graphs where the vertices are controlled by two adversarial players. We consider graph games where the objective of the first player is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition). There are two variants of the problem: the threshold problem, where the quantitative goal is to ensure that the mean-payoff value is above a threshold, and the value problem, where the quantitative goal is to ensure the optimal mean-payoff value; in both cases the qualitative parity objective must also be ensured. For game graphs with n vertices, m edges, parity objectives with d priorities, and maximal absolute reward value W for the mean-payoff objective, the previous best-known algorithms run in time O(n^{d+1} · m · W) for the threshold problem and O(n^{d+2} · m · W) for the value problem. Our main contributions are faster algorithms, with the following running times: O(n^{d-1} · m · W) for the threshold problem, and O(n^{d} · m · W · log(n · W)) for the value problem. For mean-payoff parity objectives with two priorities, our algorithms match the best-known bounds of the algorithms for mean-payoff games (without conjunction with parity objectives). Our results are relevant to the synthesis of reactive systems with both a functional requirement (given as a qualitative objective) and a performance requirement (given as a quantitative objective).
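
    To make the two objectives concrete, the following minimal Python sketch (illustrative only, not the paper's algorithm) evaluates the threshold variant on an ultimately periodic play "prefix (cycle)^ω". It assumes the common convention that the parity condition holds when the minimum priority seen infinitely often is even, and that the mean-payoff value of such a play is the average edge reward along the cycle; the function name and inputs are hypothetical.

        # Minimal sketch: conjunction of parity and mean-payoff on a lasso-shaped play.
        def satisfies_threshold_objective(cycle_priorities, cycle_rewards, threshold):
            """cycle_priorities: vertex priorities repeated along the cycle;
            cycle_rewards: rewards of the edges traversed along the cycle."""
            parity_ok = min(cycle_priorities) % 2 == 0      # the finite prefix is irrelevant
            mean_payoff = sum(cycle_rewards) / len(cycle_rewards)
            return parity_ok and mean_payoff >= threshold   # threshold problem

        # A cycle with priorities [0, 2] and rewards [3, -1] has mean payoff 1.0
        # and even minimum priority, so it meets threshold 1.
        print(satisfies_threshold_objective([0, 2], [3, -1], 1))   # True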

    LNCS

    We present an auction algorithm using multiplicative instead of constant weight updates to compute a (1−ε)-approximate maximum weight matching (MWM) in a bipartite graph with n vertices and m edges in time O(m ε^{-1} log(ε^{-1})), matching the running time of the linear-time approximation algorithm of Duan and Pettie [JACM '14]. Our algorithm is very simple and can be extended to give a dynamic data structure that maintains a (1−ε)-approximate maximum weight matching under (1) one-sided vertex deletions (with incident edges) and (2) one-sided vertex insertions (with incident edges sorted by weight) to the other side. The total time used is O(m ε^{-1} log(ε^{-1})), where m is the sum of the number of initially existing and inserted edges.
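
    For intuition, here is a minimal sketch of a classic Bertsekas-style auction with additive price increments; it is not the paper's algorithm, whose multiplicative weight updates are precisely what remove the dependence of the running time on the edge weights. All names are hypothetical, and the additive guarantee here degrades with the weight scale unless eps is scaled accordingly.

        from collections import deque

        def auction_matching(adj, n_right, eps):
            """adj[i]: list of (j, weight) edges of left vertex i."""
            price = [0.0] * n_right
            owner = [None] * n_right           # right vertex -> left vertex
            match = [None] * len(adj)          # left vertex -> right vertex
            queue = deque(range(len(adj)))
            while queue:
                i = queue.popleft()
                # Bid on the item with maximum utility weight - price.
                best = max(adj[i], key=lambda e: e[1] - price[e[0]], default=None)
                if best is None or best[1] - price[best[0]] <= 0:
                    continue                   # no profitable item: stay unmatched
                j = best[0]
                price[j] += eps                # raise the price of the won item
                if owner[j] is not None:       # evict and re-queue the previous owner
                    match[owner[j]] = None
                    queue.append(owner[j])
                owner[j], match[i] = i, j
            return match

    Each successful bid raises some price by eps and no price exceeds the maximum edge weight, which bounds the total number of bids; the multiplicative-update variant replaces this additive increment.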

    PMLR

    We study fine-grained error bounds for differentially private algorithms for counting under continual observation. Our main insight is that the matrix mechanism, when using lower-triangular matrices, can be used in the continual observation model. More specifically, we give an explicit factorization of the counting matrix M_count and explicitly upper bound the error. We also give a fine-grained analysis, specifying the exact constant in the upper bound. Our analysis is based on upper and lower bounds on the completely bounded norm (cb-norm) of M_count. Along the way, we improve the 28-year-old best-known bound of Mathias (SIAM Journal on Matrix Analysis and Applications, 1993) on the cb-norm of M_count for a large range of the dimension of M_count. Furthermore, we are the first to give concrete error bounds for various problems under continual observation, such as binary counting, maintaining a histogram, releasing an approximately cut-preserving synthetic graph, many graph-based statistics, and substring and episode counting. Finally, we note that our result can be used to obtain a fine-grained error bound for non-interactive local learning and the first lower bounds on the additive error for (ε,δ)-differentially private counting under continual observation. Subsequent to this work, Henzinger et al. (SODA, 2023) showed that our factorization also achieves fine-grained mean-squared error.
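
    As an illustration of the mechanism (not the paper's exact constants), the sketch below uses the fact that M_count, the lower-triangular all-ones matrix mapping a stream x to its prefix sums, has a lower-triangular Toeplitz square root L with entries f_k = binom(2k, k)/4^k, the Taylor coefficients of (1−t)^{−1/2}. With the factorization M_count = L · L, the matrix mechanism releases L(Lx + z) = M_count·x + Lz for Gaussian noise z; the function names are hypothetical.

        import numpy as np

        def sqrt_counting_matrix(T):
            f = np.ones(T)
            for k in range(1, T):
                f[k] = f[k - 1] * (2 * k - 1) / (2 * k)   # f_k = C(2k,k)/4^k
            L = np.zeros((T, T))
            for i in range(T):
                for j in range(i + 1):
                    L[i, j] = f[i - j]                    # lower-triangular Toeplitz
            return L

        def private_prefix_sums(x, sigma):
            L = sqrt_counting_matrix(len(x))              # L @ L = M_count
            z = np.random.normal(0.0, sigma, len(x))
            return L @ (L @ x + z)                        # prefix sums + correlated noise

        x = np.array([1, 0, 1, 1, 0], dtype=float)
        print(private_prefix_sums(x, sigma=1.0))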

    Experimental evaluation of fully dynamic k-means via coresets

    For a set of points in R^d, the Euclidean k-means problem consists of finding k centers such that the sum of squared distances from each data point to its closest center is minimized. Coresets are one of the main tools developed recently to solve this problem in a big-data context. They make it possible to compress the initial dataset while preserving its structure: running any algorithm on the coreset provides a guarantee almost equivalent to running it on the full data. In this work, we study coresets in a fully dynamic setting: points are added and deleted, with the goal of efficiently maintaining a coreset from which a k-means solution can be computed. Based on an algorithm of Henzinger and Kale [ESA '20], we present an efficient and practical implementation of a fully dynamic coreset algorithm that improves the running time by up to a factor of 20 compared to our non-optimized implementation of the algorithm by Henzinger and Kale, without sacrificing more than 7% of the quality of the k-means solution.
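
    As background, a simplified static sketch of coreset construction by importance sampling is shown below; it is not the fully dynamic structure of Henzinger and Kale, and real sensitivity-sampling coresets mix the cost-proportional probabilities with uniform and per-cluster terms that are omitted here for brevity. All names are hypothetical.

        import numpy as np

        def static_coreset(points, rough_centers, m):
            """Sample m points with probability proportional to their cost under a
            rough solution, and reweight so that weighted costs stay unbiased."""
            d2 = np.min(((points[:, None, :] - rough_centers[None, :, :]) ** 2).sum(-1), axis=1)
            p = d2 / d2.sum()
            idx = np.random.choice(len(points), size=m, replace=True, p=p)
            return points[idx], 1.0 / (m * p[idx])        # coreset points and weights

        def weighted_kmeans_cost(core, w, centers):
            """Evaluate a candidate k-means solution on the coreset instead of the full data."""
            d2 = np.min(((core[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
            return float((w * d2).sum())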

    Symbolic time and space tradeoffs for probabilistic verification

    We present a faster symbolic algorithm for the following central problem in probabilistic verification: compute the maximal end-component (MEC) decomposition of Markov decision processes (MDPs). This problem generalizes the SCC decomposition problem of graphs and closed recurrent sets of Markov chains. The model of symbolic algorithms is widely used in formal verification and model checking, where access to the input model is restricted to symbolic operations only (e.g., basic set operations and computation of the one-step neighborhood). For an input MDP with n vertices and m edges, the classical symbolic algorithm from the 1990s for the MEC decomposition requires O(n^2) symbolic operations and O(1) symbolic space. The only other symbolic algorithm for the MEC decomposition requires O(n√m) symbolic operations and O(√m) symbolic space. A main open question is whether the worst-case O(n^2) bound for symbolic operations can be beaten. We present a symbolic algorithm that requires Õ(n^{1.5}) symbolic operations and Õ(√n) symbolic space. Moreover, the parametrization of our algorithm provides a trade-off between symbolic operations and symbolic space: for all 0 < ε ≤ 1/2, the symbolic algorithm requires Õ(n^{2−ε}) symbolic operations and Õ(n^{ε}) symbolic space (Õ hides poly-logarithmic factors). Using our techniques, we present faster algorithms for computing the almost-sure winning regions of ω-regular objectives for MDPs. We consider the canonical parity objectives for ω-regular objectives, and for parity objectives with d priorities we present an algorithm that computes the almost-sure winning region with Õ(n^{2−ε}) symbolic operations and Õ(n^{ε}) symbolic space, for all 0 < ε ≤ 1/2.
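
    The restriction to symbolic operations can be made concrete with a small sketch (illustrative only, not the paper's algorithm): the algorithm may touch the graph solely through set operations and one-step predecessor/successor computations, and each round of the fixpoint loop below costs one such operation. In practice the sets are represented by BDDs rather than Python frozensets.

        def post(S, edges):
            return frozenset(v for (u, v) in edges if u in S)   # one-step successors

        def pre(S, edges):
            return frozenset(u for (u, v) in edges if v in S)   # one-step predecessors

        def symbolic_forward_reach(sources, edges):
            reach = frozenset(sources)
            while True:
                nxt = reach | post(reach, edges)    # one symbolic operation per round
                if nxt == reach:
                    return reach                    # fixpoint reached
                reach = nxt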

    Deterministic near-optimal approximation algorithms for dynamic set cover

    In the dynamic minimum set cover problem, the challenge is to minimize the update time while guaranteeing a close-to-optimal min{O(log n), f} approximation factor. (Throughout, n, m, f, and C are parameters denoting the maximum number of elements, the number of sets, the frequency, and the cost range.) In the high-frequency range, when f = Ω(log n), this was achieved by a deterministic O(log n)-approximation algorithm with O(f log n) amortized update time by Gupta et al. [Online and dynamic algorithms for set cover, in Proceedings STOC 2017, ACM, pp. 537–550]. In this paper we consider the low-frequency range, when f = O(log n), and obtain deterministic algorithms with a (1 + ε)f-approximation ratio and the following guarantees on the update time. (1) O((f/ε) · log(Cn)) amortized update time: Prior to our work, the best approximation ratio guaranteed by deterministic algorithms was O(f^2), due to Bhattacharya, Henzinger, and Italiano [Design of dynamic algorithms via primal-dual method, in Proceedings ICALP 2015, Springer, pp. 206–218]. In contrast, the only result with an O(f)-approximation was that of Abboud et al. [Dynamic set cover: Improved algorithms and lower bounds, in Proceedings STOC 2019, ACM, pp. 114–125], who designed a randomized (1 + ε)f-approximation algorithm with amortized update time. (2) O(f^2/ε^3 + (f/ε^2) · log C) amortized update time: This result improves the above update time bound for most values of f in the low-frequency range, i.e., f = o(log n). It is also the first result that is independent of m and n. It subsumes the constant amortized update time of Bhattacharya and Kulkarni [Deterministically maintaining a (2 + ε)-approximate minimum vertex cover in O(1/ε^2) amortized update time, in Proceedings SODA 2019, SIAM, pp. 1872–1885] for unweighted dynamic vertex cover (i.e., when f = 2 and C = 1). (3) O((f/ε^3) · log^2(Cn)) worst-case update time: No nontrivial worst-case update time was previously known for the dynamic set cover problem. Our bound subsumes and improves by a logarithmic factor the O(log^3 n / poly(ε)) worst-case update time for the unweighted dynamic vertex cover problem (i.e., when f = 2 and C = 1) of Bhattacharya, Henzinger, and Nanongkai [Fully dynamic approximate maximum matching and minimum vertex cover in O(log^3 n) worst case update time, in Proceedings SODA 2017, SIAM, pp. 470–489]. We achieve our results via the primal-dual approach, by maintaining a fractional packing solution as a dual certificate. Prior work in dynamic algorithms that employs the primal-dual approach uses a local update scheme that maintains relaxed complementary slackness conditions for every set. For our first result we instead use a global update scheme that does not always maintain complementary slackness conditions. For our second result we combine the global and the local update schemes. To achieve our third result we use a hierarchy of background schedulers. It is an interesting open question whether this background-scheduler technique can also be used to transform algorithms with amortized running time bounds into algorithms with worst-case running time bounds.
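
    For context, the static primal-dual f-approximation that this line of work builds on can be sketched in a few lines (illustrative, not the dynamic algorithm): raise a packing (dual) variable for each uncovered element until some containing set becomes tight, then buy the tight sets; since each element appears in at most f sets, the dual solution certifies an f-approximation.

        def primal_dual_set_cover(elements, sets, cost):
            """sets: dict name -> set of elements; cost: dict name -> float."""
            slack = dict(cost)                 # remaining slack of each set
            cover, y = [], {}
            for e in elements:
                if any(e in sets[s] for s in cover):
                    continue                   # already covered
                containing = [s for s in sets if e in sets[s]]
                y[e] = min(slack[s] for s in containing)    # raise dual until tight
                for s in containing:
                    slack[s] -= y[e]
                    if slack[s] == 0 and s not in cover:
                        cover.append(s)        # buy every set that became tight
            return cover, y

        sets = {"A": {1, 2}, "B": {2, 3}, "C": {3}}
        cost = {"A": 1.0, "B": 1.0, "C": 0.5}
        print(primal_dual_set_cover([1, 2, 3], sets, cost))   # (['A', 'C'], {1: 1.0, 3: 0.5})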

    Polynomial-time algorithms for energy games with special weight structures

    Energy games belong to a class of turn-based two-player infinite-duration games played on a weighted directed graph. They are among the rare and intriguing combinatorial problems that lie in NP ∩ coNP but are not known to be in P. While the existence of a polynomial-time algorithm has been a major open problem for decades, no algorithm was previously known that solves any non-trivial subclass in polynomial time. In this paper, we give several results based on the weight structure of the graph. First, we identify a notion of penalty and present a polynomial-time algorithm for the case when the penalty is large. Our algorithm is the first polynomial-time algorithm for a large class of weighted graphs; this class includes several counterexamples that show that many previous algorithms, such as value iteration and random-facet algorithms, require at least sub-exponential time. Our main technique is to develop the first non-trivial approximation algorithm and to show how to convert it into an exact algorithm. Moreover, we show that in a practical case in verification, where the weights are clustered around a constant number of values, the energy game problem can be solved in polynomial time. We also show that the problem remains as hard as in the general case when the clique-width is bounded or the graph is strongly ergodic, suggesting that restricting the graph structure need not help.
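
    To illustrate what the counterexamples defeat, here is a minimal sketch of the classical value-iteration (progress-measure lifting) procedure for energy games, which is only pseudo-polynomial in general; it is not the paper's new algorithm. It assumes every vertex has at least one outgoing edge, and player 0 is the player trying to keep the accumulated energy non-negative.

        def energy_value_iteration(vertices, edges, owner, weight, W):
            top = len(vertices) * W + 1                     # top = "needs infinite credit"
            f = {v: 0 for v in vertices}                    # minimal initial credit per vertex
            succ = {v: [u for (x, u) in edges if x == v] for v in vertices}

            def lift(v):
                # credit needed at v when moving to u: max(0, f[u] - weight[(v, u)])
                vals = [max(0, f[u] - weight[(v, u)]) for u in succ[v]]
                return min(vals) if owner[v] == 0 else max(vals)

            changed = True
            while changed:                                  # iterate to the least fixpoint
                changed = False
                for v in vertices:
                    new = min(lift(v), top)
                    if new > f[v]:
                        f[v], changed = new, True
            return {v: f[v] < top for v in vertices}        # True: player 0 wins from v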